
Commit

Deploying to gh-pages from @ d810616 🚀
facebook-github-bot committed Jan 10, 2024
1 parent aa6a1b9 commit 9223069
Showing 7 changed files with 116 additions and 116 deletions.
14 changes: 7 additions & 7 deletions fbgemm_gpu-cpp-api/quantize_ops.html
@@ -594,8 +594,8 @@ <h2>CUDA Operators<a class="headerlink" href="#cuda-operators" title="Permalink
</dd></dl>

<dl class="cpp function">
<dt class="sig sig-object cpp" id="_CPPv438_float_or_half_to_fusednbitrowwise_gpuRK6TensorK7int64_t">
<span id="_CPPv338_float_or_half_to_fusednbitrowwise_gpuRK6TensorK7int64_t"></span><span id="_CPPv238_float_or_half_to_fusednbitrowwise_gpuRK6TensorK7int64_t"></span><span id="_float_or_half_to_fusednbitrowwise_gpu__TensorCR.int64_tC"></span><span class="target" id="group__quantize-ops-cuda_1gaf3fe7807d87eb26147f82d4f9e5d1c9b"></span><span class="n"><span class="pre">Tensor</span></span><span class="w"> </span><span class="sig-name descname"><span class="n"><span class="pre">_float_or_half_to_fusednbitrowwise_gpu</span></span></span><span class="sig-paren">(</span><span class="k"><span class="pre">const</span></span><span class="w"> </span><span class="n"><span class="pre">Tensor</span></span><span class="w"> </span><span class="p"><span class="pre">&amp;</span></span><span class="n sig-param"><span class="pre">input</span></span>, <span class="k"><span class="pre">const</span></span><span class="w"> </span><span class="n"><span class="pre">int64_t</span></span><span class="w"> </span><span class="n sig-param"><span class="pre">bit_rate</span></span><span class="sig-paren">)</span><a class="headerlink" href="#_CPPv438_float_or_half_to_fusednbitrowwise_gpuRK6TensorK7int64_t" title="Permalink to this definition"></a><br /></dt>
<dt class="sig sig-object cpp" id="_CPPv449_single_or_half_precision_to_fusednbitrowwise_gpuRK6TensorK7int64_t">
<span id="_CPPv349_single_or_half_precision_to_fusednbitrowwise_gpuRK6TensorK7int64_t"></span><span id="_CPPv249_single_or_half_precision_to_fusednbitrowwise_gpuRK6TensorK7int64_t"></span><span id="_single_or_half_precision_to_fusednbitrowwise_gpu__TensorCR.int64_tC"></span><span class="target" id="group__quantize-ops-cuda_1gaae960240a6e2884a30205ed3fa9d3111"></span><span class="n"><span class="pre">Tensor</span></span><span class="w"> </span><span class="sig-name descname"><span class="n"><span class="pre">_single_or_half_precision_to_fusednbitrowwise_gpu</span></span></span><span class="sig-paren">(</span><span class="k"><span class="pre">const</span></span><span class="w"> </span><span class="n"><span class="pre">Tensor</span></span><span class="w"> </span><span class="p"><span class="pre">&amp;</span></span><span class="n sig-param"><span class="pre">input</span></span>, <span class="k"><span class="pre">const</span></span><span class="w"> </span><span class="n"><span class="pre">int64_t</span></span><span class="w"> </span><span class="n sig-param"><span class="pre">bit_rate</span></span><span class="sig-paren">)</span><a class="headerlink" href="#_CPPv449_single_or_half_precision_to_fusednbitrowwise_gpuRK6TensorK7int64_t" title="Permalink to this definition"></a><br /></dt>
<dd><p>Converts a tensor of <code class="docutils literal notranslate"><span class="pre">float</span></code> or <code class="docutils literal notranslate"><span class="pre">at::Half</span></code> values into a tensor of fused N-bit rowwise values.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters<span class="colon">:</span></dt>
@@ -645,9 +645,9 @@ <h2>CUDA Operators<a class="headerlink" href="#cuda-operators" title="Permalink
</dd></dl>

<dl class="cpp function">
<dt class="sig sig-object cpp" id="_CPPv438_fusednbitrowwise_to_float_or_half_gpuRKN2at6TensorEK7int64_tK7int64_t">
<span id="_CPPv338_fusednbitrowwise_to_float_or_half_gpuRKN2at6TensorEK7int64_tK7int64_t"></span><span id="_CPPv238_fusednbitrowwise_to_float_or_half_gpuRKN2at6TensorEK7int64_tK7int64_t"></span><span id="_fusednbitrowwise_to_float_or_half_gpu__at::TensorCR.int64_tC.int64_tC"></span><span class="target" id="group__quantize-ops-cuda_1ga7b611e06ca608cb798e12005e365149b"></span><span class="n"><span class="pre">at</span></span><span class="p"><span class="pre">::</span></span><a class="reference external" href="https://pytorch.org/cppdocs/api/classat_1_1_tensor.html#_CPPv4N2at6TensorE" title="(in PyTorch vmain)"><span class="n"><span class="pre">Tensor</span></span></a><span class="w"> </span><span class="sig-name descname"><span class="n"><span class="pre">_fusednbitrowwise_to_float_or_half_gpu</span></span></span><span class="sig-paren">(</span><span class="k"><span class="pre">const</span></span><span class="w"> </span><span class="n"><span class="pre">at</span></span><span class="p"><span class="pre">::</span></span><a class="reference external" href="https://pytorch.org/cppdocs/api/classat_1_1_tensor.html#_CPPv4N2at6TensorE" title="(in PyTorch vmain)"><span class="n"><span class="pre">Tensor</span></span></a><span class="w"> </span><span class="p"><span class="pre">&amp;</span></span><span class="n sig-param"><span class="pre">input</span></span>, <span class="k"><span class="pre">const</span></span><span class="w"> </span><span class="n"><span class="pre">int64_t</span></span><span class="w"> </span><span class="n sig-param"><span class="pre">bit_rate</span></span>, <span class="k"><span class="pre">const</span></span><span class="w"> </span><span class="n"><span class="pre">int64_t</span></span><span class="w"> </span><span class="n sig-param"><span class="pre">output_dtype</span></span><span class="sig-paren">)</span><a class="headerlink" href="#_CPPv438_fusednbitrowwise_to_float_or_half_gpuRKN2at6TensorEK7int64_tK7int64_t" title="Permalink to this definition"></a><br /></dt>
<dd><p>Converts a tensor of fused N-bit rowwise values into a tensor of <code class="docutils literal notranslate"><span class="pre">float</span></code> or <code class="docutils literal notranslate"><span class="pre">at::Half</span></code> values.</p>
<dt class="sig sig-object cpp" id="_CPPv449_fusednbitrowwise_to_single_or_half_precision_gpuRKN2at6TensorEK7int64_tK7int64_t">
<span id="_CPPv349_fusednbitrowwise_to_single_or_half_precision_gpuRKN2at6TensorEK7int64_tK7int64_t"></span><span id="_CPPv249_fusednbitrowwise_to_single_or_half_precision_gpuRKN2at6TensorEK7int64_tK7int64_t"></span><span id="_fusednbitrowwise_to_single_or_half_precision_gpu__at::TensorCR.int64_tC.int64_tC"></span><span class="target" id="group__quantize-ops-cuda_1ga5ee877a3c135bd5160991c77ed170b23"></span><span class="n"><span class="pre">at</span></span><span class="p"><span class="pre">::</span></span><a class="reference external" href="https://pytorch.org/cppdocs/api/classat_1_1_tensor.html#_CPPv4N2at6TensorE" title="(in PyTorch vmain)"><span class="n"><span class="pre">Tensor</span></span></a><span class="w"> </span><span class="sig-name descname"><span class="n"><span class="pre">_fusednbitrowwise_to_single_or_half_precision_gpu</span></span></span><span class="sig-paren">(</span><span class="k"><span class="pre">const</span></span><span class="w"> </span><span class="n"><span class="pre">at</span></span><span class="p"><span class="pre">::</span></span><a class="reference external" href="https://pytorch.org/cppdocs/api/classat_1_1_tensor.html#_CPPv4N2at6TensorE" title="(in PyTorch vmain)"><span class="n"><span class="pre">Tensor</span></span></a><span class="w"> </span><span class="p"><span class="pre">&amp;</span></span><span class="n sig-param"><span class="pre">input</span></span>, <span class="k"><span class="pre">const</span></span><span class="w"> </span><span class="n"><span class="pre">int64_t</span></span><span class="w"> </span><span class="n sig-param"><span class="pre">bit_rate</span></span>, <span class="k"><span class="pre">const</span></span><span class="w"> </span><span class="n"><span class="pre">int64_t</span></span><span class="w"> </span><span class="n sig-param"><span class="pre">output_dtype</span></span><span class="sig-paren">)</span><a class="headerlink" href="#_CPPv449_fusednbitrowwise_to_single_or_half_precision_gpuRKN2at6TensorEK7int64_tK7int64_t" title="Permalink to this definition"></a><br /></dt>
<dd><p>Converts a tensor of fused N-bit rowwise values into a tensor of <code class="docutils literal notranslate"><span class="pre">float</span></code> or <code class="docutils literal notranslate"><span class="pre">at::Half</span></code> or <code class="docutils literal notranslate"><span class="pre">at::Bf16</span></code> values.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters<span class="colon">:</span></dt>
<dd class="field-odd"><ul class="simple">
@@ -657,10 +657,10 @@ <h2>CUDA Operators<a class="headerlink" href="#cuda-operators" title="Permalink
</ul>
</dd>
<dt class="field-even">Throws<span class="colon">:</span></dt>
<dd class="field-even"><p><span><span class="cpp-expr sig sig-inline cpp"><span class="n">c10</span><span class="p">::</span><a class="reference external" href="https://pytorch.org/cppdocs/api/classc10_1_1_error.html#_CPPv4N3c105ErrorE" title="(in PyTorch vmain)"><span class="n">Error</span></a></span></span> – if <code class="docutils literal notranslate"><span class="pre">output_dtype</span></code> is not one of (<code class="docutils literal notranslate"><span class="pre">SparseType::FP32</span></code> or <code class="docutils literal notranslate"><span class="pre">SparseType::FP16</span></code>). </p>
<dd class="field-even"><p><span><span class="cpp-expr sig sig-inline cpp"><span class="n">c10</span><span class="p">::</span><a class="reference external" href="https://pytorch.org/cppdocs/api/classc10_1_1_error.html#_CPPv4N3c105ErrorE" title="(in PyTorch vmain)"><span class="n">Error</span></a></span></span> – if <code class="docutils literal notranslate"><span class="pre">output_dtype</span></code> is not one of (<code class="docutils literal notranslate"><span class="pre">SparseType::FP32</span></code> or <code class="docutils literal notranslate"><span class="pre">SparseType::FP16</span></code> or <code class="docutils literal notranslate"><span class="pre">SparseType::BF16</span></code>). </p>
</dd>
<dt class="field-odd">Returns<span class="colon">:</span></dt>
<dd class="field-odd"><p>A new tensor with values from the input tensor converted to <code class="docutils literal notranslate"><span class="pre">float</span></code> or <code class="docutils literal notranslate"><span class="pre">at::Half</span></code>, depending on <code class="docutils literal notranslate"><span class="pre">output_dtype</span></code>.</p>
<dd class="field-odd"><p>A new tensor with values from the input tensor converted to <code class="docutils literal notranslate"><span class="pre">float</span></code> or <code class="docutils literal notranslate"><span class="pre">at::Half</span></code> or <code class="docutils literal notranslate"><span class="pre">at::Bf16</span></code>, depending on <code class="docutils literal notranslate"><span class="pre">output_dtype</span></code>.</p>
</dd>
</dl>
</dd></dl>
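The renamed ops above convert between float/half tensors and "fused" N-bit rowwise tensors, where each row carries its own quantization parameters. As a rough illustration of the rowwise scheme only — this is a hypothetical NumPy sketch, not FBGEMM's actual kernel or packed layout (the real ops additionally pack the N-bit values into bytes and store an fp16 scale and bias inline at the end of each row) — the quantize/dequantize round trip looks like:

```python
import numpy as np

def quantize_row_nbit(row, bit_rate):
    # Hypothetical sketch: map a float32 row onto the integer range
    # [0, 2**bit_rate - 1] using a per-row fp16 scale and bias (row minimum).
    levels = (1 << bit_rate) - 1
    rmin, rmax = float(row.min()), float(row.max())
    scale = np.float16((rmax - rmin) / levels) if rmax > rmin else np.float16(1.0)
    bias = np.float16(rmin)
    q = np.round((row - np.float32(bias)) / np.float32(scale))
    return np.clip(q, 0, levels).astype(np.uint8), scale, bias

def dequantize_row_nbit(q, scale, bias):
    # Inverse transform back to float32 using the stored per-row parameters.
    return q.astype(np.float32) * np.float32(scale) + np.float32(bias)

row = np.array([0.0, 0.5, 1.0, 1.5], dtype=np.float32)
q, scale, bias = quantize_row_nbit(row, bit_rate=4)
restored = dequantize_row_nbit(q, scale, bias)  # close to the original row
```

In the real op, `output_dtype` selects whether dequantized values come back as FP32, FP16, or (per this commit's doc change) BF16.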
12 changes: 6 additions & 6 deletions genindex.html
@@ -397,8 +397,6 @@ <h2 id="_">_</h2>
<table style="width: 100%" class="indextable genindextable"><tr>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="fbgemm_gpu-cpp-api/quantize_ops.html#_CPPv422_bfloat16_to_float_gpuRKN2at6TensorE">_bfloat16_to_float_gpu (C++ function)</a>
</li>
<li><a href="fbgemm_gpu-cpp-api/quantize_ops.html#_CPPv438_float_or_half_to_fusednbitrowwise_gpuRK6TensorK7int64_t">_float_or_half_to_fusednbitrowwise_gpu (C++ function)</a>
</li>
<li><a href="fbgemm_gpu-cpp-api/quantize_ops.html#_CPPv422_float_to_bfloat16_gpuRKN2at6TensorE">_float_to_bfloat16_gpu (C++ function)</a>
</li>
@@ -420,21 +418,21 @@ <h2 id="_">_</h2>
</li>
<li><a href="fbgemm_gpu-cpp-api/quantize_ops.html#_CPPv434_fused8bitrowwise_to_float_cpu_outR6TensorRK6Tensor">_fused8bitrowwise_to_float_cpu_out (C++ function)</a>
</li>
</ul></td>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="fbgemm_gpu-cpp-api/quantize_ops.html#_CPPv430_fused8bitrowwise_to_float_gpuRKN2at6TensorE">_fused8bitrowwise_to_float_gpu (C++ function)</a>
</li>
</ul></td>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="fbgemm_gpu-cpp-api/quantize_ops.html#_CPPv440_fused8bitrowwise_to_float_mixed_dim_gpuRKN2at6TensorERKN2at6TensorEK7int64_t">_fused8bitrowwise_to_float_mixed_dim_gpu (C++ function)</a>
</li>
<li><a href="fbgemm_gpu-cpp-api/quantize_ops.html#_CPPv429_fused8bitrowwise_to_half_gpuRKN2at6TensorE">_fused8bitrowwise_to_half_gpu (C++ function)</a>
</li>
<li><a href="fbgemm_gpu-cpp-api/quantize_ops.html#_CPPv449_fused8bitrowwise_to_single_or_half_precision_gpuRKN2at6TensorEK7int64_t">_fused8bitrowwise_to_single_or_half_precision_gpu (C++ function)</a>
</li>
<li><a href="fbgemm_gpu-cpp-api/quantize_ops.html#_CPPv430_fusednbitrowwise_to_float_gpuRKN2at6TensorEK7int64_t">_fusednbitrowwise_to_float_gpu (C++ function)</a>
</li>
<li><a href="fbgemm_gpu-cpp-api/quantize_ops.html#_CPPv438_fusednbitrowwise_to_float_or_half_gpuRKN2at6TensorEK7int64_tK7int64_t">_fusednbitrowwise_to_float_or_half_gpu (C++ function)</a>
</li>
<li><a href="fbgemm_gpu-cpp-api/quantize_ops.html#_CPPv429_fusednbitrowwise_to_half_gpuRKN2at6TensorEK7int64_t">_fusednbitrowwise_to_half_gpu (C++ function)</a>
</li>
<li><a href="fbgemm_gpu-cpp-api/quantize_ops.html#_CPPv449_fusednbitrowwise_to_single_or_half_precision_gpuRKN2at6TensorEK7int64_tK7int64_t">_fusednbitrowwise_to_single_or_half_precision_gpu (C++ function)</a>
</li>
<li><a href="fbgemm_gpu-cpp-api/quantize_ops.html#_CPPv429_half_to_fused8bitrowwise_gpuRK6Tensor">_half_to_fused8bitrowwise_gpu (C++ function)</a>
</li>
@@ -447,6 +445,8 @@ <h2 id="_">_</h2>
<li><a href="fbgemm_gpu-cpp-api/quantize_ops.html#_CPPv430_paddedFP8rowwise_to_float_gpuRKN2at6TensorEKbK7int64_tK7int64_tK7int64_t">_paddedFP8rowwise_to_float_gpu (C++ function)</a>
</li>
<li><a href="fbgemm_gpu-cpp-api/quantize_ops.html#_CPPv449_single_or_half_precision_to_fused8bitrowwise_gpuRK6Tensor">_single_or_half_precision_to_fused8bitrowwise_gpu (C++ function)</a>
</li>
<li><a href="fbgemm_gpu-cpp-api/quantize_ops.html#_CPPv449_single_or_half_precision_to_fusednbitrowwise_gpuRK6TensorK7int64_t">_single_or_half_precision_to_fusednbitrowwise_gpu (C++ function)</a>
</li>
</ul></td>
</tr></table>
