Commit

Deploying to gh-pages from @ b92a351 🚀

facebook-github-bot committed Jul 7, 2023
1 parent c4b90a4 commit 1f5a621
Showing 1 changed file with 6 additions and 6 deletions.
12 changes: 6 additions & 6 deletions cpp-api/quantize_ops.html
@@ -350,7 +350,7 @@ <h2>CUDA Operators<a class="headerlink" href="#cuda-operators" title="Permalink

<dl class="cpp function">
<dt>
-<span class="target" id="group__quantize-data-cuda_1gac0c20377454dbfafcc5ac245fe6427ce"></span><code class="sig-name descname">DLL_PUBLIC at::Tensor _msfp_to_float_gpu (const at::Tensor &amp;input, const int64_t ebits, const int64_t mbits, const int64_t bias)</code></dt>
+<span class="target" id="group__quantize-data-cuda_1ga31b9029d43a60ad1fc90dc6ec54af9db"></span><code class="sig-name descname">DLL_PUBLIC Tensor _float_to_FP8rowwise_gpu (const Tensor &amp;input, const bool forward)</code></dt>
<dd></dd></dl>

<dl class="cpp function">
@@ -363,11 +363,6 @@ <h2>CUDA Operators<a class="headerlink" href="#cuda-operators" title="Permalink
<span class="target" id="group__quantize-data-cuda_1ga4520e7823d5bfd90451f544269ee308f"></span><code class="sig-name descname">DLL_PUBLIC Tensor _float_or_half_to_fused8bitrowwise_gpu (const Tensor &amp;input)</code></dt>
<dd></dd></dl>

-<dl class="cpp function">
-<dt>
-<span class="target" id="group__quantize-data-cuda_1ga31b9029d43a60ad1fc90dc6ec54af9db"></span><code class="sig-name descname">DLL_PUBLIC Tensor _float_to_FP8rowwise_gpu (const Tensor &amp;input, const bool forward)</code></dt>
-<dd></dd></dl>
-
<dl class="cpp function">
<dt>
<span class="target" id="group__quantize-data-cuda_1ga5e1238b208e88eb23e7f1b7a21ab4eff"></span><code class="sig-name descname">DLL_PUBLIC at::Tensor _fused8bitrowwise_to_float_or_half_gpu (const at::Tensor &amp;input, const int64_t output_dtype)</code></dt>
@@ -408,6 +403,11 @@ <h2>CUDA Operators<a class="headerlink" href="#cuda-operators" title="Permalink
<span class="target" id="group__quantize-data-cuda_1ga07f4c02c95710472b815bdc1d7bfff19"></span><code class="sig-name descname">DLL_PUBLIC at::Tensor _fusednbitrowwise_to_float_or_half_gpu (const at::Tensor &amp;input, const int64_t bit_rate, const int64_t output_dtype)</code></dt>
<dd></dd></dl>

+<dl class="cpp function">
+<dt>
+<span class="target" id="group__quantize-data-cuda_1gac0c20377454dbfafcc5ac245fe6427ce"></span><code class="sig-name descname">DLL_PUBLIC at::Tensor _msfp_to_float_gpu (const at::Tensor &amp;input, const int64_t ebits, const int64_t mbits, const int64_t bias)</code></dt>
+<dd></dd></dl>
+
<dl class="cpp function">
<dt>
<span class="target" id="group__quantize-data-cuda_1ga5043927653e4d50462b79b7f3df33223"></span><code class="sig-name descname">DLL_PUBLIC Tensor _float_to_paddedFP8rowwise_gpu (const Tensor &amp;input, const bool forward, const int64_t row_dim)</code></dt>
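For context, the fused 8-bit rowwise format that operators like `_float_or_half_to_fused8bitrowwise_gpu` and `_fused8bitrowwise_to_float_or_half_gpu` work with stores each row as uint8 values followed by a per-row float32 (scale, bias) pair. The sketch below is an illustrative CPU re-implementation of that idea in NumPy, not the library's actual kernel; function names here are hypothetical.

```python
import numpy as np

def fused_8bit_rowwise_quantize(x: np.ndarray) -> np.ndarray:
    """Quantize each float32 row to uint8, appending the row's
    float32 scale and bias (min) as 8 trailing bytes."""
    mins = x.min(axis=1, keepdims=True).astype(np.float32)
    maxs = x.max(axis=1, keepdims=True).astype(np.float32)
    scales = (maxs - mins) / 255.0
    scales = np.where(scales == 0, np.float32(1.0), scales)  # guard constant rows
    q = np.clip(np.round((x - mins) / scales), 0, 255).astype(np.uint8)
    # Pack (scale, bias) per row and reinterpret as 8 raw bytes.
    tail = np.concatenate([scales, mins], axis=1).astype(np.float32)
    return np.concatenate([q, tail.view(np.uint8)], axis=1)

def fused_8bit_rowwise_dequantize(packed: np.ndarray) -> np.ndarray:
    """Invert the packing: read the trailing (scale, bias) pair
    and reconstruct approximate float32 values."""
    q = packed[:, :-8].astype(np.float32)
    tail = packed[:, -8:].copy().view(np.float32)  # columns: scale, bias
    scales, mins = tail[:, :1], tail[:, 1:]
    return q * scales + mins
```

Per row, the round-trip error is bounded by half the row's quantization step, `(max - min) / 255 / 2`, which is why the GPU operators keep the scale and bias at full float32 precision.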
