From 5163cdecd1cfdcf5106ac17e566da3cd024ddb06 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?M=C3=A1ty=C3=A1s=20Aradi?= Date: Tue, 6 Aug 2024 14:02:23 +0200 Subject: [PATCH] Update driver API --- docs/how-to/hip_porting_driver_api.md | 304 -------------- docs/how-to/hip_porting_driver_api.rst | 505 ++++++++++++++++++++++++ docs/index.md | 2 + docs/reference/driver_api_reference.rst | 34 ++ docs/sphinx/_toc.yml.in | 2 + docs/understand/driver_api.rst | 21 + 6 files changed, 564 insertions(+), 304 deletions(-) delete mode 100644 docs/how-to/hip_porting_driver_api.md create mode 100644 docs/how-to/hip_porting_driver_api.rst create mode 100644 docs/reference/driver_api_reference.rst create mode 100644 docs/understand/driver_api.rst diff --git a/docs/how-to/hip_porting_driver_api.md b/docs/how-to/hip_porting_driver_api.md deleted file mode 100644 index 57879264a2..0000000000 --- a/docs/how-to/hip_porting_driver_api.md +++ /dev/null @@ -1,304 +0,0 @@ -# Porting CUDA driver API - -## Introduction to the CUDA Driver and Runtime APIs - -CUDA provides a separate CUDA Driver and Runtime APIs. The two APIs have significant overlap in functionality: - -* Both APIs support events, streams, memory management, memory copy, and error handling. -* Both APIs deliver similar performance. -* Driver APIs calls begin with the prefix `cu` while Runtime APIs begin with the prefix `cuda`. For example, the Driver API API contains `cuEventCreate` while the Runtime API contains `cudaEventCreate`, with similar functionality. -* The Driver API defines a different but largely overlapping error code space than the Runtime API, and uses a different coding convention. For example, Driver API defines `CUDA_ERROR_INVALID_VALUE` while the Runtime API defines `cudaErrorInvalidValue` - -The Driver API offers two additional pieces of functionality not provided by the Runtime API: `cuModule` and `cuCtx` APIs. - -### `cuModule` API - -The Module section of the Driver API provides additional control over how and when accelerator code objects are loaded. -For example, the driver API allows code objects to be loaded from files or memory pointers. -Symbols for kernels or global data can be extracted from the loaded code objects. -In contrast, the Runtime API automatically loads and (if necessary) compiles all of the kernels from an executable binary when run. -In this mode, NVCC must be used to compile kernel code so the automatic loading can function correctly. - -Both Driver and Runtime APIs define a function for launching kernels (called `cuLaunchKernel` or `cudaLaunchKernel`. -The kernel arguments and the execution configuration (grid dimensions, group dimensions, dynamic shared memory, and stream) are passed as arguments to the launch function. -The Runtime additionally provides the `<<< >>>` syntax for launching kernels, which resembles a special function call and is easier to use than explicit launch API (in particular with respect to handling of kernel arguments). -However, this syntax is not standard C++ and is available only when NVCC is used to compile the host code. - -The Module features are useful in an environment which generates the code objects directly, such as a new accelerator language front-end. -Here, NVCC is not used. Instead, the environment may have a different kernel language or different compilation flow. -Other environments have many kernels and do not want them to be all loaded automatically. -The Module functions can be used to load the generated code objects and launch kernels. 
-As we will see below, HIP defines a Module API which provides similar explicit control over code object management. - -### `cuCtx` API - -The Driver API defines "Context" and "Devices" as separate entities. -Contexts contain a single device, and a device can theoretically have multiple contexts. -Each context contains a set of streams and events specific to the context. -Historically contexts also defined a unique address space for the GPU, though this may no longer be the case in Unified Memory platforms (since the CPU and all the devices in the same process share a single unified address space). -The Context APIs also provide a mechanism to switch between devices, which allowed a single CPU thread to send commands to different GPUs. -HIP as well as a recent versions of CUDA Runtime provide other mechanisms to accomplish this feat - for example using streams or `cudaSetDevice`. - -The CUDA Runtime API unifies the Context API with the Device API. This simplifies the APIs and has little loss of functionality since each Context can contain a single device, and the benefits of multiple contexts has been replaced with other interfaces. -HIP provides a context API to facilitate easy porting from existing Driver codes. -In HIP, the `Ctx` functions largely provide an alternate syntax for changing the active device. - -Most new applications will prefer to use `hipSetDevice` or the stream APIs, therefore HIP has marked `hipCtx` APIs as **deprecated**. Support for these APIs may not be available in future releases. For more details on deprecated APIs please refer [HIP deprecated APIs](../reference/deprecated_api_list). - -## HIP Module and `Ctx` APIs - -Rather than present two separate APIs, HIP extends the HIP API with new APIs for Modules and `Ctx` control. - -### `hipModule` API - -Like the CUDA Driver API, the Module API provides additional control over how code is loaded, including options to load code from files or from in-memory pointers. -NVCC and HIP-Clang target different architectures and use different code object formats: NVCC is `cubin` or `ptx` files, while the HIP-Clang path is the `hsaco` format. -The external compilers which generate these code objects are responsible for generating and loading the correct code object for each platform. -Notably, there is not a fat binary format that can contain code for both NVCC and HIP-Clang platforms. The following table summarizes the formats used on each platform: - -| Format | APIs | NVCC | HIP-CLANG | -| --- | --- | --- | --- | -| Code Object | `hipModuleLoad`, `hipModuleLoadData` | `.cubin` or PTX text | `.hsaco` | -| Fat Binary | `hipModuleLoadFatBin` | `.fatbin` | `.hip_fatbin` | - -`hipcc` uses HIP-Clang or NVCC to compile host codes. Both of these may embed code objects into the final executable, and these code objects will be automatically loaded when the application starts. -The `hipModule` API can be used to load additional code objects, and in this way provides an extended capability to the automatically loaded code objects. -HIP-Clang allows both of these capabilities to be used together, if desired. Of course it is possible to create a program with no kernels and thus no automatic loading. - -### `hipCtx` API - -HIP provides a `Ctx` API as a thin layer over the existing Device functions. This `Ctx` API can be used to set the current context, or to query properties of the device associated with the context. -The current context is implicitly used by other APIs such as `hipStreamCreate`. 
- -### hipify translation of CUDA Driver API - -The HIPIFY tools convert CUDA Driver APIs for streams, events, modules, devices, memory management, context, profiler to the equivalent HIP driver calls. For example, `cuEventCreate` will be translated to `hipEventCreate`. -HIPIFY tools also convert error codes from the Driver namespace and coding convention to the equivalent HIP error code. Thus, HIP unifies the APIs for these common functions. - -The memory copy API requires additional explanation. The CUDA driver includes the memory direction in the name of the API (`cuMemcpyH2D`) while the CUDA driver API provides a single memory copy API with a parameter that specifies the direction and additionally supports a "default" direction where the runtime determines the direction automatically. -HIP provides APIs with both styles: for example, `hipMemcpyH2D` as well as `hipMemcpy`. -The first flavor may be faster in some cases since they avoid host overhead to detect the different memory directions. - -HIP defines a single error space, and uses camel-case for all errors (i.e. `hipErrorInvalidValue`). - -#### Address Spaces - -HIP-Clang defines a process-wide address space where the CPU and all devices allocate addresses from a single unified pool. -Thus addresses may be shared between contexts, and unlike the original CUDA definition a new context does not create a new address space for the device. - -#### Using `hipModuleLaunchKernel` - -`hipModuleLaunchKernel` is `cuLaunchKernel` in HIP world. It takes the same arguments as `cuLaunchKernel`. - -#### Additional Information - -* HIP-Clang creates a primary context when the HIP API is called. So in a pure driver API code, HIP-Clang will create a primary context while HIP/NVCC will have empty context stack. -HIP-Clang will push primary context to context stack when it is empty. This can have subtle differences on applications which mix the runtime and driver APIs. - -### `hip-clang` Implementation Notes - -#### `.hip_fatbin` - -hip-clang links device code from different translation units together. For each device target, a code object is generated. Code objects for different device targets are bundled by `clang-offload-bundler` as one fatbinary, which is embeded as a global symbol `__hip_fatbin` in the `.hip_fatbin` section of the ELF file of the executable or shared object. - -#### Initialization and Termination Functions - -hip-clang generates initialization and termination functions for each translation unit for host code compilation. The initialization functions call `__hipRegisterFatBinary` to register the fatbinary embeded in the ELF file. They also call `__hipRegisterFunction` and `__hipRegisterVar` to register kernel functions and device side global variables. The termination functions call `__hipUnregisterFatBinary`. -hip-clang emits a global variable `__hip_gpubin_handle` of void** type with linkonce linkage and inital value 0 for each host translation unit. Each initialization function checks `__hip_gpubin_handle` and register the fatbinary only if `__hip_gpubin_handle` is 0 and saves the return value of `__hip_gpubin_handle` to `__hip_gpubin_handle`. This is to guarantee that the fatbinary is only registered once. Similar check is done in the termination functions. - -#### Kernel Launching - -hip-clang supports kernel launching by CUDA `<<<>>>` syntax, hipLaunchKernelGGL. The latter one is macro which expand to CUDA `<<<>>>` syntax. 
- -When the executable or shared library is loaded by the dynamic linker, the initialization functions are called. In the initialization functions, when `__hipRegisterFatBinary` is called, the code objects containing all kernels are loaded; when `__hipRegisterFunction` is called, the stub functions are associated with the corresponding kernels in code objects. - -hip-clang implements two sets of kernel launching APIs. - -By default, in the host code, for the `<<<>>>` statement, hip-clang first emits call of `hipConfigureCall` to set up the threads and grids, then emits call of the stub function with the given arguments. In the stub function, `hipSetupArgument` is called for each kernel argument, then `hipLaunchByPtr` is called with a function pointer to the stub function. In `hipLaunchByPtr`, the real kernel associated with the stub function is launched. - -### NVCC Implementation Notes - -#### Interoperation between HIP and CUDA Driver - -CUDA applications may want to mix CUDA driver code with HIP code (see example below). This table shows the type equivalence to enable this interaction. - -|**HIP Type** |**CU Driver Type**|**CUDA Runtime Type**| -| ---- | ---- | ---- | -| `hipModule_t` | `CUmodule` | | -| `hipFunction_t` | `CUfunction` | | -| `hipCtx_t` | `CUcontext` | | -| `hipDevice_t` | `CUdevice` | | -| `hipStream_t` | `CUstream` | `cudaStream_t` | -| `hipEvent_t` | `CUevent` | `cudaEvent_t` | -| `hipArray` | `CUarray` | `cudaArray` | - -#### Compilation Options - -The `hipModule_t` interface does not support `cuModuleLoadDataEx` function, which is used to control PTX compilation options. -HIP-Clang does not use PTX and does not support these compilation options. -In fact, HIP-Clang code objects always contain fully compiled ISA and do not require additional compilation as a part of the load step. -The corresponding HIP function `hipModuleLoadDataEx` behaves as `hipModuleLoadData` on HIP-Clang path (compilation options are not used) and as `cuModuleLoadDataEx` on NVCC path. -For example (CUDA): - -```cpp -CUmodule module; -void *imagePtr = ...; // Somehow populate data pointer with code object - -const int numOptions = 1; -CUJit_option options[numOptions]; -void * optionValues[numOptions]; - -options[0] = CU_JIT_MAX_REGISTERS; -unsigned maxRegs = 15; -optionValues[0] = (void*)(&maxRegs); - -cuModuleLoadDataEx(module, imagePtr, numOptions, options, optionValues); - -CUfunction k; -cuModuleGetFunction(&k, module, "myKernel"); -``` - -HIP: - -```cpp -hipModule_t module; -void *imagePtr = ...; // Somehow populate data pointer with code object - -const int numOptions = 1; -hipJitOption options[numOptions]; -void * optionValues[numOptions]; - -options[0] = hipJitOptionMaxRegisters; -unsigned maxRegs = 15; -optionValues[0] = (void*)(&maxRegs); - -// hipModuleLoadData(module, imagePtr) will be called on HIP-Clang path, JIT options will not be used, and -// cupModuleLoadDataEx(module, imagePtr, numOptions, options, optionValues) will be called on NVCC path -hipModuleLoadDataEx(module, imagePtr, numOptions, options, optionValues); - -hipFunction_t k; -hipModuleGetFunction(&k, module, "myKernel"); -``` - -The below sample shows how to use `hipModuleGetFunction`. 
-```cpp
-#include <hip/hip_runtime.h>
-#include <hip/hip_runtime_api.h>
-#include <cstring>
-#include <iostream>
-#include <vector>
-
-#define LEN 64
-#define SIZE LEN<<2
-
-#ifdef __HIP_PLATFORM_AMD__
-#define fileName "vcpy_isa.co"
-#endif
-
-#ifdef __HIP_PLATFORM_NVIDIA__
-#define fileName "vcpy_isa.ptx"
-#endif
-
-#define kernel_name "hello_world"
-
-int main(){
-    float *A, *B;
-    hipDeviceptr_t Ad, Bd;
-    A = new float[LEN];
-    B = new float[LEN];
-
-    for(uint32_t i=0;i<LEN;i++){
-        A[i] = i*1.0f;
-        B[i] = 0.0f;
-        std::cout << A[i] << " " << B[i] << std::endl;
-    }
-
-#ifdef __HIP_PLATFORM_NVIDIA__
-    hipInit(0);
-    hipDevice_t device;
-    hipCtx_t context;
-    hipDeviceGet(&device, 0);
-    hipCtxCreate(&context, 0, device);
-#endif
-
-    hipMalloc((void**)&Ad, SIZE);
-    hipMalloc((void**)&Bd, SIZE);
-    hipMemcpyHtoD(Ad, A, SIZE);
-    hipMemcpyHtoD(Bd, B, SIZE);
-
-    hipModule_t Module;
-    hipFunction_t Function;
-    hipModuleLoad(&Module, fileName);
-    hipModuleGetFunction(&Function, Module, kernel_name);
-
-    std::vector<void*> argBuffer(2);
-    memcpy(&argBuffer[0], &Ad, sizeof(void*));
-    memcpy(&argBuffer[1], &Bd, sizeof(void*));
-
-    size_t size = argBuffer.size()*sizeof(void*);
-
-    void *config[] = {
-      HIP_LAUNCH_PARAM_BUFFER_POINTER, &argBuffer[0],
-      HIP_LAUNCH_PARAM_BUFFER_SIZE, &size,
-      HIP_LAUNCH_PARAM_END
-    };
-
-    hipModuleLaunchKernel(Function, 1, 1, 1, LEN, 1, 1, 0, 0, NULL, (void**)&config);
-
-    hipMemcpyDtoH(B, Bd, SIZE);
-    for(uint32_t i=0;i<LEN;i++){
-        std::cout << A[i] << " - " << B[i] << std::endl;
-    }
-
-#ifdef __HIP_PLATFORM_NVIDIA__
-    hipCtxDetach(context);
-#endif
-
-    return 0;
-}
-```
-
-## HIP Module and Texture Driver API
-
-HIP supports texture driver APIs however texture reference should be declared in host scope. Following code explains the use of texture reference for `__HIP_PLATFORM_AMD__` platform.
-
-```cpp
-// Code to generate code object
-
-#include "hip/hip_runtime.h"
-extern texture<float, 2, hipReadModeElementType> tex;
-
-__global__ void tex2dKernel(hipLaunchParm lp, float* outputData,
-                            int width,
-                            int height)
-{
-    int x = blockIdx.x*blockDim.x + threadIdx.x;
-    int y = blockIdx.y*blockDim.y + threadIdx.y;
-    outputData[y*width + x] = tex2D(tex, x, y);
-}
-
-```
-
-```cpp
-// Host code:
-
-texture<float, 2, hipReadModeElementType> tex;
-
-void myFunc ()
-{
-    // ...
-
-    textureReference* texref;
-    hipModuleGetTexRef(&texref, Module1, "tex");
-    hipTexRefSetAddressMode(texref, 0, hipAddressModeWrap);
-    hipTexRefSetAddressMode(texref, 1, hipAddressModeWrap);
-    hipTexRefSetFilterMode(texref, hipFilterModePoint);
-    hipTexRefSetFlags(texref, 0);
-    hipTexRefSetFormat(texref, HIP_AD_FORMAT_FLOAT, 1);
-    hipTexRefSetArray(texref, array, HIP_TRSA_OVERRIDE_FORMAT);
-
-    // ...
-}
-```
diff --git a/docs/how-to/hip_porting_driver_api.rst b/docs/how-to/hip_porting_driver_api.rst
new file mode 100644
index 0000000000..700f568794
--- /dev/null
+++ b/docs/how-to/hip_porting_driver_api.rst
@@ -0,0 +1,505 @@
+.. meta::
+   :description: This chapter presents how to port the CUDA driver API and showcases equivalent operations in HIP.
+   :keywords: AMD, ROCm, HIP, CUDA, driver API
+
+.. _porting_driver_api:
+
+*******************************************************************************
+Porting CUDA driver API
+*******************************************************************************
+
+Introduction to the CUDA Driver and Runtime APIs
+================================================
+
+CUDA provides separate CUDA Driver and Runtime APIs. The two APIs have significant overlap in functionality:
+
+* Both APIs support events, streams, memory management, memory copy, and error handling.
+* Both APIs deliver similar performance.
+* Driver API calls begin with the prefix ``cu`` while Runtime API calls begin with the prefix ``cuda``. For example, the Driver API contains ``cuEventCreate`` while the Runtime API contains ``cudaEventCreate``, with similar functionality.
+* The Driver API defines a different, but largely overlapping, error code space than the Runtime API, and uses a different coding convention. For example, the Driver API defines ``CUDA_ERROR_INVALID_VALUE`` while the Runtime API defines ``cudaErrorInvalidValue``.
+
+The Driver API offers two additional pieces of functionality not provided by the Runtime API: the ``cuModule`` and ``cuCtx`` APIs.
+
+``cuModule`` API
+----------------
+
+The Module section of the Driver API provides additional control over how and when accelerator code objects are loaded.
+For example, the driver API allows code objects to be loaded from files or memory pointers.
+Symbols for kernels or global data can be extracted from the loaded code objects.
+In contrast, the Runtime API automatically loads and -- if necessary -- compiles all of the kernels from an executable binary when run.
+In this mode, NVCC must be used to compile kernel code so the automatic loading can function correctly.
+
+Both Driver and Runtime APIs define a function for launching kernels, called ``cuLaunchKernel`` or ``cudaLaunchKernel``.
+The kernel arguments and the execution configuration (grid dimensions, group dimensions, dynamic shared memory, and stream) are passed as arguments to the launch function.
+The Runtime additionally provides the ``<<< >>>`` syntax for launching kernels, which resembles a special function call and is easier to use than the explicit launch API, in particular with respect to the handling of kernel arguments.
+However, this syntax is not standard C++ and is available only when NVCC is used to compile the host code.
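+
+In HIP, the same launch can be written with either form; a minimal sketch (``vectorAdd`` and its arguments ``a``, ``b`` and ``c`` are hypothetical):
+
+.. code-block:: cpp
+
+   // Both forms launch vectorAdd on 256 blocks of 64 threads, with no dynamic
+   // shared memory, on the default stream.
+   vectorAdd<<<dim3(256), dim3(64), 0, 0>>>(a, b, c);
+   hipLaunchKernelGGL(vectorAdd, dim3(256), dim3(64), 0, 0, a, b, c);
+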
+The Module features are useful in an environment which generates the code objects directly, such as a new accelerator language front-end.
+Here, NVCC is not used. Instead, the environment may have a different kernel language or a different compilation flow.
+Other environments have many kernels and do not want them all to be loaded automatically.
+The Module functions can be used to load the generated code objects and launch kernels.
+As we will see below, HIP defines a Module API which provides similar explicit control over code object management.
+
+``cuCtx`` API
+-------------
+
+The Driver API defines "Context" and "Devices" as separate entities.
+Contexts contain a single device, and a device can theoretically have multiple contexts.
+Each context contains a set of streams and events specific to the context.
+Historically, contexts also defined a unique address space for the GPU, though this may no longer be the case in Unified Memory platforms, since the CPU and all the devices in the same process share a single unified address space.
+The Context APIs also provide a mechanism to switch between devices, which allowed a single CPU thread to send commands to different GPUs.
+HIP as well as recent versions of the CUDA Runtime provide other mechanisms to accomplish this feat, for example using streams or ``cudaSetDevice``.
+
+The CUDA Runtime API unifies the Context API with the Device API. This simplifies the APIs and has little loss of functionality, since each Context can contain a single device, and the benefits of multiple contexts have been replaced with other interfaces.
+HIP provides a context API to facilitate easy porting from existing Driver codes.
+In HIP, the ``Ctx`` functions largely provide an alternate syntax for changing the active device.
+
+Most new applications will prefer to use ``hipSetDevice`` or the stream APIs; therefore, HIP has marked the ``hipCtx`` APIs as **deprecated**. Support for these APIs may not be available in future releases. For more details on deprecated APIs, refer to :doc:`../reference/deprecated_api_list`.
+
+HIP Module and ``Ctx`` APIs
+===========================
+
+Rather than present two separate APIs, HIP extends the HIP API with new APIs for Module and ``Ctx`` control.
+
+``hipModule`` API
+-----------------
+
+Like the CUDA Driver API, the Module API provides additional control over how code is loaded, including options to load code from files or from in-memory pointers.
+NVCC and HIP-Clang target different architectures and use different code object formats: NVCC supports ``cubin`` or ``ptx`` files, while the HIP-Clang path uses the ``hsaco`` format.
+The external compilers which generate these code objects are responsible for generating and loading the correct code object for each platform.
+Notably, there is no fat binary format that can contain code for both NVCC and HIP-Clang platforms. The following table summarizes the formats used on each platform:
+
+.. list-table:: Module formats
+   :header-rows: 1
+
+   * - Format
+     - APIs
+     - NVCC
+     - HIP-CLANG
+   * - Code Object
+     - ``hipModuleLoad``, ``hipModuleLoadData``
+     - ``.cubin`` or PTX text
+     - ``.hsaco``
+   * - Fat Binary
+     - ``hipModuleLoadFatBin``
+     - ``.fatbin``
+     - ``.hip_fatbin``
+
+``hipcc`` uses HIP-Clang or NVCC to compile host code. Both of these may embed code objects into the final executable, and these code objects will be automatically loaded when the application starts.
+The ``hipModule`` API can be used to load additional code objects, and in this way provides an extended capability to the automatically loaded code objects.
+HIP-Clang allows both of these capabilities to be used together, if desired. Of course it is possible to create a program with no kernels and thus no automatic loading.
+
+``hipCtx`` API
+--------------
+
+HIP provides a ``Ctx`` API as a thin layer over the existing Device functions. This ``Ctx`` API can be used to set the current context, or to query properties of the device associated with the context.
+The current context is implicitly used by other APIs, such as ``hipStreamCreate``.
+
+HIPIFY translation of CUDA Driver API
+-------------------------------------
+
+The HIPIFY tools convert CUDA Driver APIs for streams, events, modules, devices, memory management, context, and profiler control to the equivalent HIP calls. For example, ``cuEventCreate`` will be translated to ``hipEventCreate``.
+HIPIFY tools also convert error codes from the Driver namespace and coding convention to the equivalent HIP error code. Thus, HIP unifies the APIs for these common functions.
+
+The memory copy API requires additional explanation. The CUDA driver API includes the memory direction in the name of the API (``cuMemcpyHtoD``) while the CUDA Runtime API provides a single memory copy API with a parameter that specifies the direction, and additionally supports a "default" direction where the runtime determines the direction automatically.
+HIP provides APIs with both styles: for example, ``hipMemcpyHtoD`` as well as ``hipMemcpy``.
+The first flavor may be faster in some cases, since it avoids the host overhead of detecting the memory direction.
+
+HIP defines a single error space, and uses camel-case for all errors (for example, ``hipErrorInvalidValue``).
+
+For further information, visit :doc:`hipify:index`.
+
+Address Spaces
+^^^^^^^^^^^^^^
+
+HIP-Clang defines a process-wide address space where the CPU and all devices allocate addresses from a single unified pool.
+Thus addresses may be shared between contexts, and unlike the original CUDA definition a new context does not create a new address space for the device.
+
+Using ``hipModuleLaunchKernel``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+``hipModuleLaunchKernel`` is the HIP equivalent of ``cuLaunchKernel``. It takes the same arguments as ``cuLaunchKernel``, as the sketch below shows.
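+
+A minimal sketch, assuming ``kernelFunc``, ``Ad`` and ``Bd`` were obtained as in the complete sample later in this chapter, and a HIP version where the ``kernelParams`` argument is supported (the sample instead uses the ``extra`` configuration buffer):
+
+.. code-block:: cpp
+
+   void *args[] = {&Ad, &Bd};     // addresses of the kernel arguments
+   hipModuleLaunchKernel(kernelFunc,
+                         1, 1, 1,    // grid dimensions, in blocks
+                         64, 1, 1,   // block dimensions, in threads
+                         0,          // dynamic shared memory, in bytes
+                         0,          // stream (0 is the default stream)
+                         args,       // kernel parameters
+                         nullptr);   // "extra" launch parameters, unused here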
+
+Additional Information
+^^^^^^^^^^^^^^^^^^^^^^
+
+* HIP-Clang creates a primary context when the HIP API is called. So in pure driver API code, HIP-Clang will create a primary context while HIP/NVCC will have an empty context stack.
+  HIP-Clang will push the primary context onto the context stack when it is empty. This can have subtle effects on applications that mix the runtime and driver APIs.
+
+``hip-clang`` Implementation Notes
+----------------------------------
+
+``.hip_fatbin``
+^^^^^^^^^^^^^^^
+
+hip-clang links device code from different translation units together. For each device target, a code object is generated. Code objects for different device targets are bundled by ``clang-offload-bundler`` as one fatbinary, which is embedded as a global symbol ``__hip_fatbin`` in the ``.hip_fatbin`` section of the ELF file of the executable or shared object.
+
+Initialization and Termination Functions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+hip-clang generates initialization and termination functions for each translation unit when compiling host code. The initialization functions call ``__hipRegisterFatBinary`` to register the fatbinary embedded in the ELF file. They also call ``__hipRegisterFunction`` and ``__hipRegisterVar`` to register kernel functions and device-side global variables. The termination functions call ``__hipUnregisterFatBinary``.
+hip-clang emits a global variable ``__hip_gpubin_handle`` of ``void**`` type with ``linkonce`` linkage and initial value 0 for each host translation unit. Each initialization function checks ``__hip_gpubin_handle`` and registers the fatbinary only if ``__hip_gpubin_handle`` is 0, saving the return value of ``__hipRegisterFatBinary`` to ``__hip_gpubin_handle``. This guarantees that the fatbinary is registered only once. A similar check is done in the termination functions.
+
+Kernel Launching
+^^^^^^^^^^^^^^^^
+
+hip-clang supports kernel launching via the CUDA ``<<<>>>`` syntax, ``hipLaunchKernel`` or ``hipLaunchKernelGGL``. The latter is a macro which expands to the CUDA ``<<<>>>`` syntax by default, or can be turned into a template by defining ``HIP_TEMPLATE_KERNEL_LAUNCH``.
+
+When the executable or shared library is loaded by the dynamic linker, the initialization functions are called. In the initialization functions, when ``__hipRegisterFatBinary`` is called, the code objects containing all kernels are loaded; when ``__hipRegisterFunction`` is called, the stub functions are associated with the corresponding kernels in the code objects.
+
+hip-clang implements two sets of kernel launching APIs.
+
+By default, in the host code, for the ``<<<>>>`` statement, hip-clang first emits a call to ``hipConfigureCall`` to set up the threads and grids, then emits a call to the stub function with the given arguments. In the stub function, ``hipSetupArgument`` is called for each kernel argument, then ``hipLaunchByPtr`` is called with a function pointer to the stub function. In ``hipLaunchByPtr``, the real kernel associated with the stub function is launched.
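+
+The following sketch is illustrative only; ``hipConfigureCall``, ``hipSetupArgument`` and ``hipLaunchByPtr`` are internal, deprecated entry points emitted by the compiler, and ``__myKernel_stub`` is a hypothetical name for the generated stub:
+
+.. code-block:: cpp
+
+   // What `myKernel<<<grid, block, sharedBytes, stream>>>(a, n)` conceptually
+   // expands to under this legacy launch ABI:
+   //
+   //   hipConfigureCall(grid, block, sharedBytes, stream); // stash the config
+   //   __myKernel_stub(a, n);                              // generated stub
+
+   void __myKernel_stub(float *a, int n) {
+     hipSetupArgument(&a, sizeof(a), 0);         // record each argument at
+     hipSetupArgument(&n, sizeof(n), sizeof(a)); // its offset in the buffer
+     hipLaunchByPtr((void *)__myKernel_stub);    // launch the kernel associated
+                                                 // with this stub
+   }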
+
+NVCC Implementation Notes
+-------------------------
+
+Inter-operation between HIP and CUDA Driver
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+CUDA applications may want to mix CUDA driver code with HIP code (see the example below). This table shows the type equivalence that enables this interaction.
+
+.. list-table:: Equivalence table between HIP and CUDA types
+   :header-rows: 1
+
+   * - HIP Type
+     - CU Driver Type
+     - CUDA Runtime Type
+   * - ``hipModule_t``
+     - ``CUmodule``
+     -
+   * - ``hipFunction_t``
+     - ``CUfunction``
+     -
+   * - ``hipCtx_t``
+     - ``CUcontext``
+     -
+   * - ``hipDevice_t``
+     - ``CUdevice``
+     -
+   * - ``hipStream_t``
+     - ``CUstream``
+     - ``cudaStream_t``
+   * - ``hipEvent_t``
+     - ``CUevent``
+     - ``cudaEvent_t``
+   * - ``hipArray``
+     - ``CUarray``
+     - ``cudaArray``
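+
+A minimal interoperation sketch for the NVCC path, where the HIP types above are typedefs of the corresponding CUDA types (``devPtr`` and ``numBytes`` are assumed to exist):
+
+.. code-block:: cpp
+
+   // Create a stream with the CUDA driver API, then use it through HIP.
+   CUstream cuStream;
+   cuStreamCreate(&cuStream, CU_STREAM_DEFAULT);
+
+   hipStream_t hipStream = cuStream;  // same underlying handle on the NVCC path
+   hipMemsetAsync(devPtr, 0, numBytes, hipStream);
+   hipStreamSynchronize(hipStream);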
+
+Compilation Options
+^^^^^^^^^^^^^^^^^^^
+
+The ``hipModule_t`` interface does not support the ``cuModuleLoadDataEx`` function, which is used to control PTX compilation options.
+HIP-Clang does not use PTX and does not support these compilation options.
+In fact, HIP-Clang code objects always contain fully compiled code for a device-specific instruction set and do not require additional compilation as a part of the load step.
+The corresponding HIP function ``hipModuleLoadDataEx`` behaves as ``hipModuleLoadData`` on the HIP-Clang path (compilation options are not used) and as ``cuModuleLoadDataEx`` on the NVCC path.
+
+For example:
+
+.. tab-set::
+
+   .. tab-item:: HIP
+
+      .. code-block:: cpp
+
+         hipModule_t module;
+         void *imagePtr = ...;  // Somehow populate data pointer with code object
+
+         const int numOptions = 1;
+         hipJitOption options[numOptions];
+         void *optionValues[numOptions];
+
+         options[0] = hipJitOptionMaxRegisters;
+         unsigned maxRegs = 15;
+         optionValues[0] = (void *)(&maxRegs);
+
+         // hipModuleLoadData(module, imagePtr) will be called on the HIP-Clang
+         // path, JIT options will not be used, and
+         // cuModuleLoadDataEx(module, imagePtr, numOptions, options, optionValues)
+         // will be called on the NVCC path
+         hipModuleLoadDataEx(module, imagePtr, numOptions, options, optionValues);
+
+         hipFunction_t k;
+         hipModuleGetFunction(&k, module, "myKernel");
+
+   .. tab-item:: CUDA
+
+      .. code-block:: cpp
+
+         CUmodule module;
+         void *imagePtr = ...;  // Somehow populate data pointer with code object
+
+         const int numOptions = 1;
+         CUjit_option options[numOptions];
+         void *optionValues[numOptions];
+
+         options[0] = CU_JIT_MAX_REGISTERS;
+         unsigned maxRegs = 15;
+         optionValues[0] = (void *)(&maxRegs);
+
+         cuModuleLoadDataEx(module, imagePtr, numOptions, options, optionValues);
+
+         CUfunction k;
+         cuModuleGetFunction(&k, module, "myKernel");
+
+The following sample shows how to use ``hipModuleGetFunction``.
+
+.. code-block:: cpp
+
+   #include <hip/hip_runtime.h>
+   #include <hip/hip_runtime_api.h>
+
+   #include <cstring>
+   #include <iostream>
+   #include <vector>
+
+   #define LEN 64
+   #define SIZE (LEN << 2)
+
+   #ifdef __HIP_PLATFORM_AMD__
+   #define fileName "vcpy_isa.co"
+   #endif
+
+   #ifdef __HIP_PLATFORM_NVIDIA__
+   #define fileName "vcpy_isa.ptx"
+   #endif
+
+   #define kernel_name "hello_world"
+
+   int main() {
+     float *A, *B;
+     hipDeviceptr_t Ad, Bd;
+     A = new float[LEN];
+     B = new float[LEN];
+
+     for (uint32_t i = 0; i < LEN; i++) {
+       A[i] = i * 1.0f;
+       B[i] = 0.0f;
+       std::cout << A[i] << " " << B[i] << std::endl;
+     }
+
+   #ifdef __HIP_PLATFORM_NVIDIA__
+     hipInit(0);
+     hipDevice_t device;
+     hipCtx_t context;
+     hipDeviceGet(&device, 0);
+     hipCtxCreate(&context, 0, device);
+   #endif
+
+     hipMalloc((void **)&Ad, SIZE);
+     hipMalloc((void **)&Bd, SIZE);
+
+     hipMemcpyHtoD(Ad, A, SIZE);
+     hipMemcpyHtoD(Bd, B, SIZE);
+     hipModule_t Module;
+     hipFunction_t Function;
+     hipModuleLoad(&Module, fileName);
+     hipModuleGetFunction(&Function, Module, kernel_name);
+
+     std::vector<void *> argBuffer(2);
+     memcpy(&argBuffer[0], &Ad, sizeof(void *));
+     memcpy(&argBuffer[1], &Bd, sizeof(void *));
+
+     size_t size = argBuffer.size() * sizeof(void *);
+
+     void *config[] = {HIP_LAUNCH_PARAM_BUFFER_POINTER, &argBuffer[0],
+                       HIP_LAUNCH_PARAM_BUFFER_SIZE, &size, HIP_LAUNCH_PARAM_END};
+
+     hipModuleLaunchKernel(Function, 1, 1, 1, LEN, 1, 1, 0, 0, NULL,
+                           (void **)&config);
+
+     hipMemcpyDtoH(B, Bd, SIZE);
+     for (uint32_t i = 0; i < LEN; i++) {
+       std::cout << A[i] << " - " << B[i] << std::endl;
+     }
+
+   #ifdef __HIP_PLATFORM_NVIDIA__
+     hipCtxDetach(context);
+   #endif
+
+     return 0;
+   }
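+
+Note that the code object loaded here is not produced by the host compilation. Assuming the ``hello_world`` kernel lives in a separate source file such as ``vcpy_kernel.cpp`` (name illustrative), it can be compiled into a loadable code object with ``hipcc --genco vcpy_kernel.cpp -o vcpy_isa.co``.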
+
+HIP Module and Texture Driver API
+=================================
+
+HIP supports the texture driver APIs; however, the texture reference must be declared in host scope. The following code explains the use of a texture reference on the ``__HIP_PLATFORM_AMD__`` platform.
+
+.. code-block:: cpp
+
+   // Code to generate code object
+
+   #include "hip/hip_runtime.h"
+   extern texture<float, 2, hipReadModeElementType> tex;
+
+   __global__ void tex2dKernel(hipLaunchParm lp, float *outputData, int width,
+                               int height) {
+     int x = blockIdx.x * blockDim.x + threadIdx.x;
+     int y = blockIdx.y * blockDim.y + threadIdx.y;
+     outputData[y * width + x] = tex2D(tex, x, y);
+   }
+
+.. code-block:: cpp
+
+   // Host code:
+
+   texture<float, 2, hipReadModeElementType> tex;
+
+   void myFunc ()
+   {
+     // ...
+
+     textureReference* texref;
+     hipModuleGetTexRef(&texref, Module1, "tex");
+     hipTexRefSetAddressMode(texref, 0, hipAddressModeWrap);
+     hipTexRefSetAddressMode(texref, 1, hipAddressModeWrap);
+     hipTexRefSetFilterMode(texref, hipFilterModePoint);
+     hipTexRefSetFlags(texref, 0);
+     hipTexRefSetFormat(texref, HIP_AD_FORMAT_FLOAT, 1);
+     hipTexRefSetArray(texref, array, HIP_TRSA_OVERRIDE_FORMAT);
+
+     // ...
+   }
+
+Driver Entry Point Access
+=========================
+
+Starting from HIP version 6.2.0, support for Driver Entry Point Access is available when using CUDA 12.0 or newer. This feature allows developers to directly interact with the CUDA driver API, providing more control over GPU operations.
+
+Driver Entry Point Access provides the following capabilities:
+
+* Retrieving the address of a runtime function
+* Requesting the per-thread default stream version
+* Accessing new HIP features on older toolkits with a newer driver
+
+Address retrieval
+-----------------
+
+The function ``hipGetProcAddress`` can be used to obtain the address of a runtime function. This is demonstrated in the following example:
+
+.. code-block:: cpp
+
+   #include <hip/hip_runtime.h>
+   #include <hip/hip_runtime_api.h>
+
+   #include <iostream>
+
+   typedef hipError_t (*hipInit_t)(unsigned int);
+
+   int main() {
+     // Initialize the HIP runtime
+     hipError_t res = hipInit(0);
+     if (res != hipSuccess) {
+       std::cerr << "Failed to initialize HIP runtime." << std::endl;
+       return 1;
+     }
+
+     // Get the address of the hipInit function
+     hipInit_t hipInitFunc;
+     int hipVersion = HIP_VERSION; // Use the HIP version defined in hip_runtime_api.h
+     uint64_t flags = 0; // No special flags
+     hipDriverProcAddressQueryResult symbolStatus;
+
+     res = hipGetProcAddress("hipInit", (void**)&hipInitFunc, hipVersion, flags, &symbolStatus);
+     if (res != hipSuccess) {
+       std::cerr << "Failed to get address of hipInit()." << std::endl;
+       return 1;
+     }
+
+     // Call the hipInit function using the obtained address
+     res = hipInitFunc(0);
+     if (res == hipSuccess) {
+       std::cout << "HIP runtime initialized successfully using hipGetProcAddress()." << std::endl;
+     } else {
+       std::cerr << "Failed to initialize HIP runtime using hipGetProcAddress()." << std::endl;
+     }
+
+     return 0;
+   }
+
+Per-thread default stream version request
+-----------------------------------------
+
+HIP offers functionality similar to CUDA for managing streams on a per-thread basis. By using ``hipStreamPerThread``, each thread can independently manage its default stream, which reduces contention between threads that would otherwise share the single legacy default stream. The following example demonstrates allocating device memory and running an asynchronous operation on the per-thread default stream.
+
+.. code-block:: cpp
+
+   #include <hip/hip_runtime.h>
+
+   #include <iostream>
+
+   int main() {
+     // Initialize the HIP runtime
+     hipError_t res = hipInit(0);
+     if (res != hipSuccess) {
+       std::cerr << "Failed to initialize HIP runtime." << std::endl;
+       return 1;
+     }
+
+     // Get the per-thread default stream
+     hipStream_t stream = hipStreamPerThread;
+
+     // Use the stream for some operation
+     // For example, allocate memory on the device
+     void* d_ptr;
+     size_t size = 1024;
+     res = hipMalloc(&d_ptr, size);
+     if (res != hipSuccess) {
+       std::cerr << "Failed to allocate memory." << std::endl;
+       return 1;
+     }
+
+     // Perform some operation using the stream
+     // For example, set memory on the device
+     res = hipMemsetAsync(d_ptr, 0, size, stream);
+     if (res != hipSuccess) {
+       std::cerr << "Failed to set memory." << std::endl;
+       return 1;
+     }
+
+     // Synchronize the stream
+     res = hipStreamSynchronize(stream);
+     if (res != hipSuccess) {
+       std::cerr << "Failed to synchronize stream." << std::endl;
+       return 1;
+     }
+
+     std::cout << "Operation completed successfully using per-thread default stream." << std::endl;
+
+     // Free the allocated memory
+     hipFree(d_ptr);
+
+     return 0;
+   }
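+
+To make stream-less HIP calls use the per-thread default stream implicitly throughout a translation unit, the behavior can also be requested at compile time; with the clang-based ROCm toolchain this is the ``-fgpu-default-stream=per-thread`` compiler option (availability depends on the compiler version).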
+
+Access to new HIP features with a newer driver
+----------------------------------------------
+
+HIP is designed with forward compatibility, allowing newer features to be utilized with older toolkits, provided a compatible driver is present. Feature support can be verified through runtime API functions and version checks. This approach ensures that applications can benefit from new features and improvements in the HIP runtime without the need for recompilation with a newer toolkit. The function ``hipGetProcAddress`` enables dynamic querying and usage of newer functions offered by the HIP runtime, even if the application was built with an older toolkit.
+
+An example is provided for a hypothetical ``foo()`` function.
+
+.. code-block:: cpp
+
+   // Get the address of the foo function
+   foo_t fooFunc;
+   int hipVersion = 60300000; // The HIP version to request (6.3.0 in this example)
+   uint64_t flags = 0; // No special flags
+   hipDriverProcAddressQueryResult symbolStatus;
+
+   res = hipGetProcAddress("foo", (void**)&fooFunc, hipVersion, flags, &symbolStatus);
+
+The HIP version number is defined as an integer:
+
+.. code-block:: cpp
+
+   HIP_VERSION = HIP_VERSION_MAJOR * 10000000 + HIP_VERSION_MINOR * 100000 + HIP_VERSION_PATCH
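+
+For HIP 6.3.0, this works out to ``6 * 10000000 + 3 * 100000 + 0 = 60300000``, the value passed as ``hipVersion`` in the snippet above.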
+
+For further details, see :doc:`../how-to/faq`.
+
+Reference
+=========
+
+For driver API reference, visit :ref:`driver_api_reference`.
diff --git a/docs/index.md b/docs/index.md
index 9e0c22fa8d..5e1078b76b 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -32,6 +32,7 @@ On non-AMD platforms, like NVIDIA, HIP provides header files required to support
 * {doc}`./understand/programming_model`
 * {doc}`./understand/hardware_implementation`
 * {doc}`./understand/amd_clr`
+* {doc}`./understand/driver_api`
 
 :::
 
@@ -56,6 +57,7 @@ On non-AMD platforms, like NVIDIA, HIP provides header files required to support
 * {doc}`/doxygen/html/index`
 * [C++ language extensions](./reference/cpp_language_extensions)
 * [C++ language support](./reference/cpp_language_support)
+* [Driver API](./reference/driver_api_reference)
 * [HIP math API](./reference/math_api)
 * [Comparing syntax for different APIs](./reference/terms)
 * [HSA runtime API for ROCm](./reference/virtual_rocr)
diff --git a/docs/reference/driver_api_reference.rst b/docs/reference/driver_api_reference.rst
new file mode 100644
index 0000000000..1ae18ca4bd
--- /dev/null
+++ b/docs/reference/driver_api_reference.rst
@@ -0,0 +1,34 @@
+.. meta::
+   :description: This page lists the driver API functions supported by HIP.
+   :keywords: AMD, ROCm, HIP, CUDA, driver API
+
+.. _driver_api_reference:
+
+*******************************************************************************
+Driver API reference
+*******************************************************************************
+
+This page lists the driver API functions supported by HIP.
+
+For compatibility and version information relative to the CUDA Driver API, visit :doc:`hipify:index`.
+
+HIP driver API
+==============
+
+Context
+-------
+
+.. doxygengroup:: Context
+   :content-only:
+
+Module
+------
+
+.. doxygengroup:: Module
+   :content-only:
+
+Driver Entry Point Access
+-------------------------
+
+.. doxygenfunction:: hipGetProcAddress
diff --git a/docs/sphinx/_toc.yml.in b/docs/sphinx/_toc.yml.in
index 850fde34e1..103092bfb2 100644
--- a/docs/sphinx/_toc.yml.in
+++ b/docs/sphinx/_toc.yml.in
@@ -18,6 +18,7 @@ subtrees:
   - file: understand/programming_model
   - file: understand/hardware_implementation
   - file: understand/amd_clr
+  - file: understand/driver_api
 
 - caption: How to
   entries:
@@ -42,6 +43,7 @@ subtrees:
         title: C++ language extensions
       - file: reference/cpp_language_support
         title: C++ language support
+      - file: reference/driver_api_reference
       - file: reference/math_api
       - file: reference/terms
         title: Comparing syntax for different APIs
diff --git a/docs/understand/driver_api.rst b/docs/understand/driver_api.rst
new file mode 100644
index 0000000000..ac7cbbe066
--- /dev/null
+++ b/docs/understand/driver_api.rst
@@ -0,0 +1,21 @@
+.. meta::
+   :description: This chapter describes the concept of the driver API.
+   :keywords: AMD, ROCm, HIP, CUDA, driver API
+
+.. _driver_api:
+
+*******************************************************************************
+Driver API
+*******************************************************************************
+
+The driver API offers developers low-level control over GPU operations, enabling them to manage GPU resources, load and launch kernels, and handle memory explicitly. This API is more flexible and powerful than the runtime API, but it requires a deeper understanding of the GPU architecture and more detailed management.
+
+One significant advantage of the driver API is its ability to dynamically load and manage code objects, which is particularly useful for applications that need to generate or modify kernels at runtime. This flexibility allows for more sophisticated and adaptable GPU programming.
+
+Memory management with the driver API involves explicit allocation, deallocation, and data transfer operations. This level of control can lead to optimized performance for specific applications, as developers can fine-tune memory usage. However, it also demands careful handling to avoid memory leaks and ensure efficient memory utilization.
+
+Unlike the runtime API, the driver API does not automatically handle tasks such as context creation and kernel loading. While the runtime API is more convenient and easier to use for most applications, the driver API provides greater control and can be more efficient for complex or performance-critical applications.
+
+Using the driver API can result in longer development times due to the need for more detailed code and explicit management. However, the actual runtime performance can be similar to or even better than the runtime API, depending on how well the application is optimized.
+
+While AMD HIP does not have a direct equivalent to CUDA's Driver API, it supports driver API functionalities, such as managing contexts, modules, memory, and driver entry point access. These features are detailed in :ref:`porting_driver_api`, and described in :ref:`driver_api_reference`.